Educational Outcome


Artificial Intelligence for Optimal Learning: A Comparative Approach towards AI-Enhanced Learning Environments

Hariharan, Ananth

arXiv.org Artificial Intelligence

In the rapidly evolving educational landscape, the integration of technology has shifted from an enhancement to a cornerstone of educational strategy worldwide. This transition is propelled by advancements in digital technology, especially the emergence of artificial intelligence as a crucial tool in learning environments. This research project critically evaluates the impact of three distinct educational settings: traditional educational methods without technological integration, those enhanced by non-AI technology, and those utilising AI-driven technologies. This comparison aims to assess how each environment influences educational outcomes, engagement, pedagogical methods, and equity in access to learning resources, and how each contributes uniquely to the learning experience. The ultimate goal of this research is to synthesise the strengths of each model to create a more holistic educational approach. By integrating the personal interaction and tested pedagogical techniques of traditional classrooms, the enhanced accessibility and collaborative tools offered by non-AI technology, and the personalised, adaptive learning strategies enabled by AI-driven technologies, education systems can develop richer, more effective learning environments. This hybrid approach aims to leverage the best elements of each setting, thereby enhancing educational outcomes, engagement, and inclusiveness, while also addressing the distinct challenges and limitations inherent in each model. The intention is to create an educational framework deeply attentive to the diverse needs of students, ensuring equitable access to high-quality education for all.


A Novel Psychometrics-Based Approach to Developing Professional Competency Benchmark for Large Language Models

Kardanova, Elena, Ivanova, Alina, Tarasova, Ksenia, Pashchenko, Taras, Tikhoniuk, Aleksei, Yusupova, Elen, Kasprzhak, Anatoly, Kuzminov, Yaroslav, Kruchinskaia, Ekaterina, Brun, Irina

arXiv.org Artificial Intelligence

The era of large language models (LLMs) raises questions not only about how to train models, but also about how to evaluate them. Despite numerous existing benchmarks, insufficient attention is often given to creating assessments that test LLMs in a valid and reliable manner. To address this challenge, we adapt the Evidence-Centered Design (ECD) methodology and propose a comprehensive approach to benchmark development based on rigorous psychometric principles. In this paper, we make a first attempt to illustrate this approach by creating a new benchmark in the field of pedagogy and education, highlighting the limitations of existing benchmark development approaches and taking into account the development of LLMs. We conclude that a new approach to benchmarking is required to match the growing complexity of AI applications in the educational context. We construct a novel benchmark guided by Bloom's taxonomy and rigorously designed by a consortium of education experts trained in test development. The resulting benchmark thus provides an academically robust and practical assessment tool tailored for LLMs, rather than human participants. Tested empirically on the GPT model in the Russian language, it evaluates model performance across varied task complexities, revealing critical gaps in current LLM capabilities. Our results indicate that while generative AI tools hold significant promise for education - potentially supporting tasks such as personalized tutoring, real-time feedback, and multilingual learning - their reliability as autonomous teachers' assistants currently remains rather limited, particularly in tasks requiring deeper cognitive engagement.


chatGPT for generating questions and assessments based on accreditations

Aboalela, Rania Anwar

arXiv.org Artificial Intelligence

This research aims to take advantage of artificial intelligence techniques to produce student assessments that are compatible with the different academic accreditations of the same program. The study examined the possibility of using generative artificial intelligence to produce tests compliant with the standards of the National Center for Academic Accreditation of the Kingdom of Saudi Arabia and the Accreditation Board for Engineering and Technology. A novel method was introduced to map the verbs used to create the questions introduced in the tests. The method makes it possible to use generative artificial intelligence both to produce questions that measure educational outcomes and to check their validity. A questionnaire was distributed to determine whether the use of generative artificial intelligence to create exam questions is acceptable to faculty members, and whether they would accept assistance in validating the questions they submit and amending them in accordance with academic accreditations. The questionnaire was distributed to faculty members of different majors in the Kingdom of Saudi Arabia's universities. One hundred twenty responses were obtained, with an 85% approval rate for generating complete exam questions with generative artificial intelligence, whereas 98% approved of editing and improving already existing questions.
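The verb-mapping idea in this abstract can be illustrated with a small sketch: each exam-question stem is checked against the action verbs an accreditation standard associates with a learning-outcome level. The verb lists and function names below are invented for illustration, not taken from the paper or from the NCAAA or ABET standards.

```python
# Illustrative sketch: validate exam-question stems against the action
# verbs an accreditation standard allows for each learning-outcome level.
# The verb lists here are examples only, not any body's official lists.

ALLOWED_VERBS = {
    "knowledge":   {"define", "list", "identify"},
    "application": {"apply", "solve", "demonstrate"},
    "analysis":    {"analyze", "compare", "differentiate"},
}

def classify_question(question: str):
    """Return the outcome level whose verb starts the question, if any."""
    words = question.lower().split()
    if not words:
        return None
    for level, verbs in ALLOWED_VERBS.items():
        if words[0] in verbs:
            return level
    return None  # verb not mapped: flag the question for faculty review

print(classify_question("Define entropy in your own words."))      # knowledge
print(classify_question("Compare BFS and DFS on sparse graphs."))  # analysis
```

A generated question whose leading verb maps to no level would be returned to faculty for amendment, matching the validation workflow the abstract describes.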


Bayesian adaptive and interpretable functional regression for exposure profiles

Gao, Yunan, Kowal, Daniel R.

arXiv.org Machine Learning

Pollutant exposure during gestation is a known and adverse factor for birth and health outcomes. However, the links between prenatal air pollution exposures and educational outcomes are less clear, in particular the critical windows of susceptibility during pregnancy. Using a large cohort of students in North Carolina, we study the link between prenatal daily $\mbox{PM}_{2.5}$ exposure and 4th end-of-grade reading scores. We develop and apply a locally adaptive and highly scalable Bayesian regression model for scalar responses with functional and scalar predictors. The proposed model pairs a B-spline basis expansion with dynamic shrinkage priors to capture both smooth and rapidly-changing features in the regression surface. The model is accompanied by a new decision analysis approach for functional regression that extracts the critical windows of susceptibility and guides the model interpretations. These tools help to identify and address broad limitations with the interpretability of functional regression models. Simulation studies demonstrate more accurate point estimation, more precise uncertainty quantification, and far superior window selection than existing approaches. Leveraging the proposed modeling, computational, and decision analysis framework, we conclude that prenatal $\mbox{PM}_{2.5}$ exposure during early and late pregnancy is most adverse for 4th end-of-grade reading scores.
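The core modelling idea above, a scalar response regressed on a functional predictor through a B-spline basis expansion, can be sketched in a deliberately simplified form. The sketch below uses penalized least squares rather than the paper's Bayesian dynamic-shrinkage priors, and all data are synthetic; it shows only how the basis expansion turns a functional regression into an ordinary coefficient fit.

```python
# Simplified scalar-on-function regression sketch (NOT the paper's
# Bayesian model): y_i = ∫ X_i(t) β(t) dt + ε_i, with β(t) expanded
# in a B-spline basis and the coefficients fit by ridge regression.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n, T, K, deg = 200, 100, 12, 3          # subjects, grid points, basis size, cubic

tgrid = np.linspace(0.0, 1.0, T)
# Clamped knot vector giving K cubic B-spline basis functions on [0, 1].
inner = np.linspace(0.0, 1.0, K - deg + 1)
knots = np.concatenate([[0.0] * deg, inner, [1.0] * deg])
B = np.column_stack([BSpline(knots, np.eye(K)[k], deg)(tgrid)
                     for k in range(K)])          # (T, K) basis on the grid

# Synthetic exposure curves and a "true" coefficient with one late window.
X = rng.normal(size=(n, T)).cumsum(axis=1) / np.sqrt(T)
beta_true = np.exp(-0.5 * ((tgrid - 0.7) / 0.07) ** 2)
dt = tgrid[1] - tgrid[0]
y = dt * X @ beta_true + rng.normal(scale=0.1, size=n)

# With β(t) = B(t) c, the integral becomes a linear model y ≈ (dt · X B) c.
Z = dt * X @ B
lam = 1e-3
c = np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ y)
beta_hat = B @ c                                   # estimated β(t) on the grid

print("corr(beta_hat, beta_true) =", np.corrcoef(beta_hat, beta_true)[0, 1])
```

In the paper, the ridge penalty is replaced by locally adaptive dynamic shrinkage priors, which let the estimate be smooth in some regions and rapidly changing in others, and the fitted curve is then summarized by a decision analysis that extracts critical windows.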


Computationally Identifying Funneling and Focusing Questions in Classroom Discourse

Alic, Sterling, Demszky, Dorottya, Mancenido, Zid, Liu, Jing, Hill, Heather, Jurafsky, Dan

arXiv.org Artificial Intelligence

Responsive teaching is a highly effective strategy that promotes student learning. In math classrooms, teachers might "funnel" students towards a normative answer or "focus" students to reflect on their own thinking, deepening their understanding of math concepts. When teachers focus, they treat students' contributions as resources for collective sensemaking, and thereby significantly improve students' achievement and confidence in mathematics. We propose the task of computationally detecting funneling and focusing questions in classroom discourse. We do so by creating and releasing an annotated dataset of 2,348 teacher utterances labeled for funneling and focusing questions, or neither. We introduce supervised and unsupervised approaches to differentiating these questions. Our best model, a supervised RoBERTa model fine-tuned on our dataset, has a strong linear correlation of .76 with human expert labels and with positive educational outcomes, including math instruction quality and student achievement, showing the model's potential for use in automated teacher feedback tools. Our unsupervised measures show significant but weaker correlations with human labels and outcomes, and they highlight interesting linguistic patterns of funneling and focusing questions. The high performance of the supervised measure indicates its promise for supporting teachers in their instruction.
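The task framing above can be sketched with a minimal baseline. This is not the paper's fine-tuned RoBERTa model: it is a bag-of-words classifier trained on a handful of invented teacher utterances, shown only to make the three-way funneling/focusing/neither labelling concrete.

```python
# Toy baseline for the funneling / focusing / neither classification task.
# The utterances and labels below are invented examples, not drawn from
# the paper's released dataset of 2,348 annotated teacher utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "So the answer is four, right?",                # steers toward an answer
    "Just add the two numbers together.",
    "What's the next step in the formula?",
    "Why do you think that works?",                 # probes student reasoning
    "How did you decide to split it that way?",
    "Can you explain your strategy to the class?",
    "Please open your books to page ten.",          # neither
    "We'll continue this tomorrow.",
]
labels = ["funneling"] * 3 + ["focusing"] * 3 + ["neither"] * 2

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)

print(clf.predict(["How did you figure that out?"]))
```

The paper's supervised model replaces the TF-IDF features with a RoBERTa encoder fine-tuned on the annotated dataset, which is what achieves the reported .76 correlation with expert labels.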


How Can AI Improve Educational Outcomes in the United States?

#artificialintelligence

Advances in artificial intelligence (AI) are creating opportunities to improve K-12 education. Personalized learning applications can increase student engagement in the classroom and close learning gaps, while AI tools can help teachers reduce their workloads, design better interventions, and reduce burnout. But there are a number of technical, operational, and social challenges that stand in the way of widespread AI usage in schools. Moreover, policymakers have not yet embraced a strategic vision of AI to ensure effective deployment of the technology in the classroom. Join the Center for Data Innovation for a panel discussion about the ways policymakers can address existing concerns while supporting AI use by students, teachers, and administrators.


Testing Educational Digital Games

Communications of the ACM

Lamont A. Flowers (lflower@clemson.edu) is the Distinguished Professor of Educational Leadership in the Department of Educational and Organizational Leadership Development in the College of Education and the Executive Director of the Charles H. Houston Center for the Study of the Black Experience in Education in the Division of Inclusion and Equity at Clemson University, Clemson, SC, USA.


AI Foretells Student's Educational Outcomes based on Social Media posts

#artificialintelligence

Predicting students' outcomes is at the heart of most educational institutions' work. Parents are no different when it comes to knowing about the future their children will pursue. Artificial Intelligence (AI) and its applications have helped the education sector figure out many study-related outcomes that help students follow the right dream. Artificial intelligence models are considered outstanding at predictive analysis: the technology anticipates the future by analyzing past data.


Artificial Intelligence can predict students' educational outcomes based on tweets- Edexlive

#artificialintelligence

You may not need to ask teachers how your kid is performing in studies, as his or her tweets will be enough to gauge whether he or she will make it big in the future, thanks to Artificial Intelligence (AI). A team of Russian researchers has used AI-based models to distinguish high academic achievers from lower ones based on their social media posts. The prediction model uses a mathematical textual analysis that registers users' vocabulary (its range and the semantic fields from which concepts are taken), characters and symbols, post length, and word length. Every word has its own rating (a kind of IQ). Scientific and cultural topics, English words, and longer words and posts rank highly and serve as indicators of good academic performance.


Artificial intelligence can predict students' educational outcomes based on tweets

#artificialintelligence

Ivan Smirnov, Leading Research Fellow of the Laboratory of Computational Social Sciences at the Institute of Education of HSE University, has created a computer model that can distinguish high academic achievers from lower ones based on their social media posts. The prediction model uses a mathematical textual analysis that registers users' vocabulary (its range and the semantic fields from which concepts are taken), characters and symbols, post length, and word length. Every word has its own rating (a kind of IQ). Scientific and cultural topics, English words, and words and posts that are longer in length rank highly and serve as indicators of good academic performance. An abundance of emojis, words or whole phrases written in capital letters, and vocabulary related to horoscopes, driving, and military service indicate lower grades in school.
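The surface lexical signals the article describes, vocabulary range, word and post length, capitalisation, and emoji use, are straightforward to compute. The sketch below shows only that feature-extraction step on invented posts; the per-word "IQ" ratings and the trained HSE prediction model are not reproduced here.

```python
# Feature-extraction sketch for the lexical signals described in the
# article. Posts are invented; the trained model is not reproduced.
import re

def post_features(post: str) -> dict:
    """Surface lexical features of a social media post."""
    words = re.findall(r"[A-Za-zА-Яа-яЁё']+", post)
    return {
        "post_length": len(post),
        "n_words": len(words),
        "mean_word_length": (sum(map(len, words)) / len(words)
                             if words else 0.0),
        "vocab_range": len({w.lower() for w in words}),      # distinct words
        "all_caps_words": sum(w.isupper() and len(w) > 1 for w in words),
        "emoji_like": len(re.findall(r"[\U0001F300-\U0001FAFF]", post)),
    }

print(post_features("Reading Tolstoy tonight, WONDERFUL prose"))
```

Per the article, a downstream model would weight these features (plus topic and per-word ratings) so that scientific vocabulary and longer posts push the prediction toward high achievement, while heavy emoji use and all-caps phrases push it the other way.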